
    Multitask learning for recognizing stress and depression in social media

    Stress and depression are prevalent nowadays across people of all ages due to the fast pace of modern life. People use social media to express their feelings, so social media constitute a valuable source of information for the early detection of stress and depression. Although many research works target the early recognition of stress and depression, limitations remain. Multi-task learning settings have been proposed that use depression as the primary task and emotion (or figurative language) as the auxiliary task. However, although stress is inextricably linked with depression, researchers treat the two as separate tasks. To address these limitations, we present the first study that exploits two different datasets collected under different conditions and introduce two multitask learning frameworks, which use depression and stress as the main and auxiliary tasks respectively. Specifically, we use a depression dataset and a stress dataset including stressful posts from ten subreddits across five domains. In the first approach, each post passes through a shared BERT layer, which is updated by both tasks; next, two separate BERT encoder layers are exploited, each updated by one task. The second approach consists of shared and task-specific layers weighted by attention fusion networks. We conduct a series of experiments and compare our approaches with existing research initiatives, single-task learning, and transfer learning. Experiments show multiple advantages of our approaches over state-of-the-art ones.
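    The shared-layer/task-specific-layer idea described above (hard parameter sharing) can be sketched with a toy numpy forward pass. This is not the paper's BERT-based model: the dimensions, weights, and the `forward` helper are hypothetical stand-ins chosen only to show how one shared representation feeds two task heads.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d_in, d_hid = 8, 4

    # shared encoder weights: in training, gradients from BOTH tasks update these
    W_shared = rng.normal(size=(d_in, d_hid))
    # task-specific heads: depression (main task) and stress (auxiliary task)
    W_dep = rng.normal(size=(d_hid, 2))
    W_str = rng.normal(size=(d_hid, 2))

    def forward(x):
        z = np.tanh(x @ W_shared)      # shared representation of each post
        return z @ W_dep, z @ W_str    # one logit pair per task

    posts = rng.normal(size=(3, d_in)) # stand-in for post embeddings
    dep_logits, str_logits = forward(posts)
    ```

    In the real framework the shared block is a BERT layer and each head is a further BERT encoder layer, but the data flow is the same.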

    Towards e-Government Policy Formation: A Multi-Facet Taxonomy Of e-Government Projects

    This article investigates the complex phenomenon of national e-Government policy formation. It contains a literature review of e-Government policy frameworks and a dimensional analysis, based on a review of case studies, of the specific management aspects of projects. An e-Government project management taxonomy is proposed, consisting of four specific e-Government management dimensions: the project type, the domain sector, the administration level, and the beneficiary. The taxonomy is used to map some of the fundamental dimensions required during the composition of a national e-Government policy, and the implications of its application are discussed. The taxonomy aims to assist policy-makers and stakeholders in adapting e-Government strategies for successful e-Government implementation.

    A Multimodal Approach for Dementia Detection from Spontaneous Speech with Tensor Fusion Layer

    Alzheimer's disease (AD) is a progressive neurological disorder, meaning that the symptoms develop gradually over the years. It is also the main cause of dementia, which affects memory, thinking skills, and mental abilities. Nowadays, researchers have moved their interest towards AD detection from spontaneous speech, since it constitutes a time-effective procedure. However, existing state-of-the-art works proposing multimodal approaches do not take into consideration the inter- and intra-modal interactions and propose early and late fusion approaches. To tackle these limitations, we propose deep neural networks that can be trained in an end-to-end manner and capture the inter- and intra-modal interactions. Firstly, each audio file is converted to an image consisting of three channels, i.e., log-Mel spectrogram, delta, and delta-delta. Next, each transcript is passed through a BERT model followed by a gated self-attention layer. Similarly, each image is passed through a Swin Transformer followed by an independent gated self-attention layer. Acoustic features are also extracted from each audio file. Finally, the representation vectors from the different modalities are fed to a tensor fusion layer for capturing the inter-modal interactions. Extensive experiments conducted on the ADReSS Challenge dataset indicate that our introduced approaches obtain valuable advantages over existing research initiatives, reaching Accuracy and F1-score of up to 86.25% and 85.48% respectively.
    Comment: 2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI) - Oral Presentation
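    The tensor fusion layer mentioned above forms the outer product of the per-modality vectors (each extended with a constant 1) so that unimodal, bimodal, and trimodal interaction terms all appear in one fused vector. A minimal numpy sketch, with toy vectors standing in for the real text, image, and acoustic representations:

    ```python
    import numpy as np

    def tensor_fusion(*modalities):
        """Outer-product fusion: append a constant 1 to each vector so
        lower-order (unimodal/bimodal) terms survive alongside the full
        cross-modal interaction terms."""
        fused = np.array([1.0])
        for v in modalities:
            v1 = np.concatenate([v, [1.0]])
            fused = np.outer(fused, v1).ravel()
        return fused

    text_vec  = np.array([0.5, -1.0])      # stand-in: BERT + gated self-attention
    image_vec = np.array([2.0, 0.3, 1.1])  # stand-in: Swin Transformer output
    audio_vec = np.array([0.7])            # stand-in: acoustic features
    z = tensor_fusion(text_vec, image_vec, audio_vec)
    # fused dimensionality is prod(d_i + 1) = 3 * 4 * 2 = 24
    ```

    The trailing 1s are what distinguish tensor fusion from a plain outer product: without them, dropping any modality would zero out the whole representation.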

    A Review Of Interoperability Standards And Initiatives In Electronic Government

    Being important at organizational, process and semantic levels, interoperability became a key characteristic of the new electronic government systems and services over the last decade. As a crucial prerequisite for automated process execution leading to “one-stop” e-Government services, interoperability has been systematically prescribed since the dawn of the 21st century: standardization frameworks that included guidelines ranging from simple statements to well-defined international Web-Service standards started to appear at national and cross-country levels, powered by governments, the European Union or the United Nations. In parallel, most international software, hardware and service vendors created their own strategies for achieving the goal of open, collaborative, loosely coupled systems and components. The paper presents the main milestones in this fascinating quest that shaped electronic government during the last 10 years, describing national frameworks, key pan-European projects, international standardization and main industrial and research achievements. Moreover, the paper describes the next steps needed to achieve interoperability at the technical, semantic, organizational, legal or policy level, leading to the transformation of administrative processes and the provision of low-cost, high-quality services to citizens and businesses.

    Comparison of Missing Data Imputation Methods using the Framingham Heart study dataset

    Cardiovascular disease (CVD) is a class of diseases that involve the heart or blood vessels and, according to the World Health Organization, is the leading cause of death worldwide. EHR data for this condition, as for medical cases in general, very frequently contain missing values. The percentage of missingness may vary and is linked to instrument errors, manual data entry procedures, etc. Even though the missing rate is usually significant, in many cases missing value imputation is handled poorly, either with case deletion or with simple statistical approaches such as mode and median imputation. These methods are known to introduce significant bias, since they do not account for the relationships between the dataset's variables. In the medical setting, many datasets consist of lab tests or patient medical tests, where these relationships are present and strong. To address these limitations, in this paper we test and modify state-of-the-art missing value imputation methods based on Generative Adversarial Networks (GANs) and Autoencoders. The evaluation covers both data imputation and post-imputation prediction. On the imputation task, we achieve improvements of 0.20 in normalised Root Mean Squared Error (RMSE) and 7.00% in Area Under the Receiver Operating Characteristic Curve (AUROC). On the post-imputation prediction task, our models outperform the standard approaches by 2.50% in F1-score.
    Comment: 2022 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI)
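    The bias of simple imputers that the abstract refers to is easy to demonstrate. The toy numpy example below is not the paper's GAN/autoencoder method; it only illustrates why ignoring inter-variable relationships hurts: when one variable strongly depends on another, median imputation is far worse than even a simple model-based imputer.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 500
    x = rng.normal(size=n)
    y = 2.0 * x + rng.normal(scale=0.1, size=n)   # y strongly depends on x
    miss = rng.random(n) < 0.3                    # ~30% of y is missing

    # median imputation ignores the x-y relationship entirely
    y_median = np.where(miss, np.median(y[~miss]), y)

    # a model-based imputer exploits the relationship (here: linear regression)
    slope, intercept = np.polyfit(x[~miss], y[~miss], 1)
    y_model = np.where(miss, slope * x + intercept, y)

    def rmse(imputed):
        return np.sqrt(np.mean((imputed[miss] - y[miss]) ** 2))
    ```

    GAN- and autoencoder-based imputers generalise the same idea: they learn the joint structure of all variables and fill gaps consistently with it, rather than with a single marginal statistic.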

    Neural Architecture Search with Multimodal Fusion Methods for Diagnosing Dementia

    Alzheimer's dementia (AD) affects memory, thinking, and language, deteriorating a person's life. An early diagnosis is very important, as it enables the person to receive medical help and ensures quality of life. Therefore, leveraging spontaneous speech in conjunction with machine learning methods for recognizing AD patients has emerged as a hot topic. Most previous works employ Convolutional Neural Networks (CNNs) to process the input signal. However, finding a CNN architecture is a time-consuming process that requires domain expertise. Moreover, prior works introduce early and late fusion approaches for fusing different modalities, or concatenate the representations of the different modalities during training, so the inter-modal interactions are not captured. To tackle these limitations, we first exploit a Neural Architecture Search (NAS) method to automatically find a high-performing CNN architecture. Next, we exploit several fusion methods, including Multimodal Factorized Bilinear Pooling and Tucker Decomposition, to combine the speech and text modalities. To the best of our knowledge, there is no prior work exploiting a NAS approach and these fusion methods in the task of dementia detection from spontaneous speech. We perform extensive experiments on the ADReSS Challenge dataset and show the effectiveness of our approach over state-of-the-art methods.
    Comment: Accepted at ICASSP 202
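    Multimodal Factorized Bilinear (MFB) pooling, one of the fusion methods named above, approximates a full bilinear interaction between two modality vectors with two low-rank factor matrices followed by sum-pooling. A toy numpy sketch; the dimensions, the factor matrices `U`/`V`, and the `mfb` helper are illustrative choices, not the paper's exact configuration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d_speech, d_text, k, o = 6, 5, 3, 4   # k factors per output dim, o output dims

    # two low-rank factor matrices replace a full d_speech x d_text x o tensor
    U = rng.normal(size=(d_speech, k * o))
    V = rng.normal(size=(d_text, k * o))

    def mfb(x, y):
        joint = (U.T @ x) * (V.T @ y)            # element-wise product in factor space
        pooled = joint.reshape(o, k).sum(axis=1) # sum-pool over the k factors
        # signed square-root + L2 normalisation, commonly paired with MFB
        z = np.sign(pooled) * np.sqrt(np.abs(pooled))
        return z / (np.linalg.norm(z) + 1e-12)

    z = mfb(rng.normal(size=d_speech), rng.normal(size=d_text))
    ```

    Compared with concatenation, every output dimension here mixes speech and text multiplicatively, which is exactly the inter-modal interaction the abstract says plain concatenation misses.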

    Enabling Semantic Interoperability in e-Government: A System-based Methodological Framework for XML Schema Management at National Level

    Articulating semantic interoperability in e-Government remains in question as long as the international standardization efforts do not reach a consensus on how to semantically annotate and exchange data, but merely focus on the syntactic aspects by publishing sets of XML Schemas. As one-stop governmental services at national and cross-country level become an imperative, the need emerges for standardized data definitions, codification of existing unstructured information, and a framework for managing governmental data in a unified way. Effectively applied to the Greek e-Government National Interoperability Framework, this paper proposes a methodology for designing semantically enriched XML Schemas, based on the UN/CEFACT Core Components Technical Specification (CCTS), with which homogenized governmental information complies. A discussion of a prospective architecture for managing large sets of XML Schemas is also motivated, in order to recognize the necessary components and the key issues that need to be tackled when designing a Governmental Schema Registry.

    Data-driven building energy efficiency prediction based on envelope heat losses using physics-informed neural networks

    The analytical prediction of building energy performance in residential buildings based on the heat losses of the individual envelope components is a challenging task. This field is still in its infancy, with relatively limited research conducted in this specific area to date, especially when it comes to data-driven approaches. In this paper we introduce a novel physics-informed neural network model for addressing this problem. Employing unexposed datasets that encompass general building information, audited characteristics, and heating energy consumption, we feed the deep learning model with general building information, while the model's output consists of the structural components and several thermal properties that are in fact the basic elements of an energy performance certificate (EPC). On top of this neural network, a function based on physics equations calculates the energy consumption of the building from its heat losses and augments the loss function of the deep learning model. The methodology is tested on a real case study of 256 buildings located in Riga, Latvia. Our investigation yields promising results in terms of prediction accuracy, paving the way for automated, data-driven energy efficiency prediction based on basic properties of the building, in contrast to the exhaustive, human-led energy audits that are the current status quo.
    Comment: 8 pages, 1 figure
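    The "physics function on top of the network" idea can be sketched as a composite loss. The paper does not spell out its exact equations, so the sketch below assumes a standard steady-state transmission-loss formula (Q = Σ Uᵢ·Aᵢ·ΔT·t); the areas, U-values, and the weight `lam` are all hypothetical.

    ```python
    import numpy as np

    def envelope_heat_loss_kwh(u_values, areas, delta_t, hours):
        """Steady-state transmission losses Q = sum_i U_i * A_i * dT * t, in kWh."""
        return np.sum(u_values * areas) * delta_t * hours / 1000.0

    def physics_informed_loss(pred_u, true_u, areas, delta_t, hours,
                              measured_kwh, lam=0.1):
        data_term = np.mean((pred_u - true_u) ** 2)      # supervised EPC-style targets
        q_pred = envelope_heat_loss_kwh(pred_u, areas, delta_t, hours)
        physics_term = (q_pred - measured_kwh) ** 2      # consistency with metered use
        return data_term + lam * physics_term

    areas  = np.array([120.0, 40.0, 90.0])   # walls, windows, roof (m^2), illustrative
    true_u = np.array([0.8, 2.6, 0.5])       # U-values in W/(m^2 K), illustrative
    measured = envelope_heat_loss_kwh(true_u, areas, delta_t=20.0, hours=2000.0)
    perfect = physics_informed_loss(true_u, true_u, areas, 20.0, 2000.0, measured)
    ```

    The physics term penalises predicted envelope properties that are inconsistent with the building's metered heating consumption, which is what lets the model learn even where audited component labels are scarce.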

    Transfer learning for day-ahead load forecasting: a case study on European national electricity demand time series

    Short-term load forecasting (STLF) is crucial for the daily operation of power grids. However, the non-linearity, non-stationarity, and randomness characterizing electricity demand time series render STLF a challenging task. Various forecasting approaches have been proposed for improving STLF, including neural network (NN) models that are trained using data from multiple electricity demand series which may not necessarily include the target series. In the present study, we investigate the performance of this special case of STLF, called transfer learning (TL), by considering a set of 27 time series that represent the national day-ahead electricity demand of indicative European countries. We employ a popular and easy-to-implement NN model and perform a clustering analysis to identify similar patterns among the series and assist TL. In this context, two different TL approaches, with and without the clustering step, are compiled and compared against each other as well as against a typical NN training setup. Our results demonstrate that TL can outperform the conventional approach, especially when clustering techniques are considered.
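    The clustering step that assists TL can be sketched as follows: cluster the (shape-normalised) demand series, then train the forecasting model for a target country on all series in its cluster rather than on its own history alone. The random profiles, `k=3`, and the small k-means routine below are illustrative stand-ins, not the paper's actual data or clustering method.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def kmeans(X, k, iters=50):
        # minimal Lloyd's algorithm: assign to nearest centre, recompute means
        centers = X[rng.choice(len(X), size=k, replace=False)].copy()
        labels = np.zeros(len(X), dtype=int)
        for _ in range(iters):
            dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            labels = dists.argmin(axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = X[labels == j].mean(axis=0)
        return labels

    # toy stand-ins for 27 national series: one week of hourly values,
    # z-normalised so clustering responds to shape rather than country size
    series = rng.normal(size=(27, 168)).cumsum(axis=1)
    series = (series - series.mean(1, keepdims=True)) / series.std(1, keepdims=True)

    labels = kmeans(series, k=3)
    # TL with clustering: the NN for country 0 is trained on every series
    # sharing its cluster, instead of only on its own history
    sources = np.flatnonzero(labels == labels[0])
    ```

    Restricting the source pool to a cluster of similar demand shapes is what the study finds most beneficial, since it avoids negative transfer from dissimilar national profiles.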